Model Testing
To perform model testing from the Katonic Platform, you can use Locust.
What is Locust?
Locust is an easy-to-use, scriptable and scalable performance testing tool.
You define the behavior of your users in regular Python code, instead of being stuck in a UI or restrictive domain-specific language.
This makes Locust infinitely expandable and very developer friendly.
How to Test Models with Locust using the Katonic Platform
- Log in to the Platform.
- Once you are logged in and on the Dashboard, click on the Workspace tab.
- Click on + Create workspace to create a new workspace.
- Fill in the details for the workspace (Name, Environment, Web-app, Image, Additional Port, Resources):
- Enter the Workspace name.
- Select VS Code as the Environment.
- Select Streamlit as the Web-app.
- Select py38-streamlit1.14 as the Image.
- Select 8050, or any other port you want to use, as the Additional Port.
- Select Resources as per the requirements of the project. Choose larger resources if you want to work with a higher number of requests or workers.
- Click on the Create button to create the new Workspace.
- Wait until the workspace state changes from Processing to Running.
- Once the state changes to Running, click the Connect button to start.
- Once you are connected to VS Code, create a Python file named apiloadtesting.py and a requirements.txt file for installing the required packages.
- Write your own load-testing code in apiloadtesting.py, or copy the sample code below.
from locust import task
from locust.contrib.fasthttp import FastHttpUser

import logging
from http.client import HTTPConnection

# Enable verbose HTTP debug logging so individual requests and responses are visible.
HTTPConnection.debuglevel = 1
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

# Replace these placeholders before running the test.
token = "The User Model API Token needs to paste here"
data = "The Data used in the model"

class RequestHandled(FastHttpUser):
    @task
    def get_pred(self):
        # Post the model input to the deployed model's /predict endpoint,
        # passing the API token in the Authorization header.
        pred_data = {"data": data}
        self.client.post(
            "/predict",
            json=pred_data,
            headers={"Authorization": token},
            catch_response=False,
        )
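Before running a full load test, it can help to confirm that the token, data, and endpoint work for a single request. The snippet below is a minimal sketch using the requests library; the base URL is a placeholder and the payload format assumes the same {"data": ...} structure used in the Locust task above.
import requests

# Placeholders: replace with your model's API base URL (up to v1), token, and input data.
BASE_URL = "https://<your-model-request-url-up-to-v1>"
token = "The User Model API Token needs to paste here"
data = "The Data used in the model"

response = requests.post(
    f"{BASE_URL}/predict",
    json={"data": data},
    headers={"Authorization": token},
)
print(response.status_code, response.text)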
- Create the requirements.txt file with all required dependencies, as below:
locust
- Run the following command to install the packages required for Locust:
pip install -r requirements.txt
- Make sure to change the data and token in apiloadtesting.py before running the command.
Note: The data will differ based on the input of the model you are trying to test (a hypothetical example follows the steps below).
To change the data and token, follow these steps:
- In apiloadtesting.py, replace the placeholder in the line token = "The User Model API Token needs to paste here" with the new API token generated after the model is deployed.
- In apiloadtesting.py, replace the placeholder in the line data = "The Data used in the model" with the input data you are using for the model.
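The exact structure of data depends on what your deployed model expects. As a purely hypothetical illustration (the field names and values below are invented, not taken from any specific model), a tabular model might take a list of records:
# Hypothetical example only: the keys and values must match your model's input schema.
data = [
    {"age": 42, "income": 55000, "tenure_months": 18},
    {"age": 31, "income": 72000, "tenure_months": 5},
]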
- After installing all the packages and adding the data and token in the apiloadtesting.py file, run Locust for load testing by entering the following command in the terminal:
locust -f apiloadtesting.py --web-port=8050 --web-host=0.0.0.0
- If you want to test with a master and worker combination, use the commands below (one per terminal):
- Terminal 1
locust -f apiloadtesting.py --master --web-port=8050 --web-host=0.0.0.0
- Terminal 2
locust -f apiloadtesting.py --worker
- Terminal 3
locust -f apiloadtesting.py --worker
- After running the command, the terminal will show messages such as Starting web interface at http://0.0.0.0:8050 and Starting Locust (version).
- Once Locust is running on the chosen port, go back to the Katonic Platform (Workspaces) and click on the Live App button.
- When you click on the Live App button, a pop-up will appear stating, "App is running, please click here." Click on click here, and Locust will open in a new tab with its web UI.
- On the Locust web UI you will see options such as Number of users (peak concurrency), Spawn rate (users started/second), and Host (e.g. http://www.example.com).
- In the Host field, paste the model's API link (Request URL) up to v1 only, since the Locust task appends /predict to this host, and then click the Start swarming option to test the model.
- Once the test has started, you can see the output as graphs and charts showing RPS (requests per second), failed requests, etc.
Note: You can check the failures from the Failures tab, next to the Charts tab, to see which requests are failing.